Pulse shape discrimination based on the Tempotron: a powerful classifier on GPU
Liu, Haoran, Li, Peng, Liu, Ming-Zhe, Wang, Kai-Ming, Zuo, Zhuo, Liu, Bing-Qi
This study introduces the Tempotron, a powerful classifier based on a third-generation (spiking) neural network model, for pulse shape discrimination. By eliminating the need for manual feature extraction, the Tempotron processes pulse signals directly, generating discrimination results from learned prior knowledge. The study performed experiments using GPU acceleration, achieving a more than 500-fold speedup over the CPU-based model, and investigated the impact of noise augmentation on the Tempotron's performance. Experimental results showed that the Tempotron is a potent classifier capable of achieving high discrimination accuracy. Furthermore, analyzing the Tempotron's neural activity during training shed light on its learning characteristics and aided in selecting its hyperparameters. The dataset used in this study and the source code of the GPU-based Tempotron are publicly available on GitHub at https://github.com/HaoranLiu507/TempotronGPU.
- Asia > China > Sichuan Province > Chengdu (0.05)
- North America > United States > Gulf of Mexico > Central GOM (0.04)
- Energy (0.67)
- Health & Medicine (0.46)
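The Tempotron described above can be illustrated with a minimal sketch: a neuron sums PSP kernels from its afferents' spike times, classifies by whether the membrane trace crosses threshold, and on error shifts weights by the PSP values at the time of maximal potential. All parameter values and the training loop below are assumptions for illustration, not taken from the paper or its repository.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's).
TAU, TAU_S, THETA, LR = 15.0, 3.75, 1.0, 0.01
T = np.arange(0.0, 100.0, 1.0)  # time grid, ms

# Normalize the double-exponential PSP kernel to peak at 1.
T_PEAK = TAU * TAU_S / (TAU - TAU_S) * np.log(TAU / TAU_S)
V0 = 1.0 / (np.exp(-T_PEAK / TAU) - np.exp(-T_PEAK / TAU_S))

def kernel(dt):
    """Postsynaptic potential kernel; zero before the presynaptic spike."""
    dt = np.asarray(dt, dtype=float)
    out = V0 * (np.exp(-dt / TAU) - np.exp(-dt / TAU_S))
    return np.where(dt > 0, out, 0.0)

def potential(w, spike_times):
    """Membrane trace V(t) = sum_i w_i * sum_{t_s in spikes_i} K(t - t_s)."""
    V = np.zeros_like(T)
    for wi, times in zip(w, spike_times):
        for ts in times:
            V += wi * kernel(T - ts)
    return V

def train_step(w, spike_times, label):
    """One Tempotron update: on a classification error, shift the weights
    by the PSP values evaluated at the time of maximal membrane potential."""
    V = potential(w, spike_times)
    if (V.max() >= THETA) == bool(label):
        return w  # already classified correctly
    t_max = T[np.argmax(V)]
    sign = 1.0 if label else -1.0
    for i, times in enumerate(spike_times):
        w[i] += sign * LR * float(np.sum(kernel(t_max - np.array(times))))
    return w
```

A GPU implementation like the one in the paper would evaluate `potential` for many input patterns in parallel, which is where the reported speedup comes from.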
Effective Learning Requires Neuronal Remodeling of Hebbian Synapses
This paper revisits the classical neuroscience paradigm of Hebbian learning. We find that a necessary requirement for effective associative memory learning is that the efficacies of the incoming synapses should be uncorrelated. This requirement is difficult to achieve in a robust manner by Hebbian synaptic learning, since it depends on network-level information. Effective learning can yet be obtained by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This normalization drastically improves the memory capacity of associative networks, from an essentially bounded capacity to one that scales linearly with the network's size.
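The zero-sum normalization above amounts to a simple per-neuron operation on a Hebbian weight matrix: subtract each row's mean so every neuron's incoming efficacies sum to zero. A minimal sketch, with illustrative network size and random patterns not taken from the paper:

```python
import numpy as np

# Store random +/-1 patterns with a Hebbian outer-product rule, then
# remodel each neuron's incoming efficacies (one row of W) to sum to zero.
rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1.0, 1.0], size=(P, N))

W = np.zeros((N, N))
for xi in patterns:
    W += np.outer(xi, xi) / N          # Hebbian increment
np.fill_diagonal(W, 0.0)

# Neuronal remodeling: subtract each row's mean.
W_zs = W - W.mean(axis=1, keepdims=True)

# Every neuron's incoming efficacies now sum to zero...
row_sums = W_zs.sum(axis=1)

# ...and stored patterns remain (approximately) fixed points of the
# sign dynamics.
overlap = np.mean(np.sign(W_zs @ patterns[0]) == patterns[0])
```

The point of the paper is that this row-wise constraint is naturally maintained by a single neuron, whereas decorrelating the incoming efficacies directly would require network-level information.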
Spiking neurons can discover predictive features by aggregate-label learning
To implement aggregate-label learning, I calculated how neurons should modify their synaptic efficacies in order to most effectively adjust their number of output spikes. Because a neuron's discrete number of spikes does not provide a direction of gradual improvement, I derived the multi-spike tempotron learning rule in an abstract space of continuous spike threshold variables. In this space, changes in synaptic efficacies are directed along the steepest path, reducing the discrepancy between a neuron's fixed biological spike threshold and the closest hypothetical threshold at which the neuron would fire a desired number of spikes. With the resulting synaptic learning rule, aggregate-label learning enabled simple neuron models to solve the temporal credit assignment problem. Neurons reliably identified all clues whose occurrences contributed to a delayed feedback signal.
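The "closest hypothetical threshold" can be illustrated with a heavily simplified, reset-free sketch: for a fixed membrane trace, the threshold at which the neuron would cross exactly k times can be read off the trace's local maxima. The full multi-spike tempotron rule additionally accounts for the post-spike reset and differentiates this critical threshold with respect to the synaptic efficacies; none of that is shown here.

```python
import numpy as np

def local_maxima(V):
    """Values of interior local maxima of a 1-D trace, largest first."""
    V = np.asarray(V, dtype=float)
    idx = np.where((V[1:-1] > V[:-2]) & (V[1:-1] >= V[2:]))[0] + 1
    return np.sort(V[idx])[::-1]

def critical_threshold(V, k):
    """Hypothetical threshold v*_k at (just below) which the reset-free
    trace has exactly k upward crossings; None if the trace has fewer
    than k peaks and so cannot produce k spikes."""
    peaks = local_maxima(V)
    return float(peaks[k - 1]) if k <= len(peaks) else None
```

Gradient descent on the gap between the biological threshold and v*_k then gives a direction in which to change the efficacies so the spike count moves toward the desired value.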
Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches
Levina, Anna, Herrmann, Michael
There is experimental evidence that cortical neurons show avalanche activity with the intensity of firing events being distributed as a power-law. We present a biologically plausible extension of a neural network which exhibits a power-law avalanche distribution for a wide range of connectivity parameters.
- Europe > Germany > Lower Saxony > Göttingen (0.05)
- North America > United States > Texas > Clay County (0.04)
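The dynamical-synapse mechanism behind the avalanche result can be sketched in a toy integrate-and-fire simulation: each presynaptic spike depletes that neuron's synaptic resources, which then recover slowly, so strong activity transiently weakens the coupling and avalanches self-limit. All parameters below are illustrative assumptions, not the paper's model values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, THETA = 64, 1.0          # network size, firing threshold
U, TAU_REC, J0 = 0.2, 10.0, 4.0  # usage fraction, recovery time, rest strength

J = np.full(N, J0)                 # per-neuron synaptic resource
h = rng.uniform(0.0, THETA, N)     # membrane potentials
sizes = []                         # recorded avalanche sizes

for _ in range(5000):
    J += (J0 - J) / TAU_REC                 # slow resource recovery
    h[rng.integers(N)] += 0.1               # weak external drive
    size = 0
    while np.any(h >= THETA):               # avalanche until quiescent
        for i in np.flatnonzero(h >= THETA):
            size += 1
            h[i] -= THETA                   # reset the spiking neuron
            h += U * J[i] / N               # broadcast depressing input
            J[i] *= 1.0 - U                 # deplete its resources
    if size:
        sizes.append(size)
```

In the paper's analysis, this self-regulation of the coupling strength is what keeps the network near the critical regime over a wide range of connectivity parameters; a histogram of `sizes` from a longer run is where the power-law would be measured.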
Learning in Spiking Neural Assemblies
We consider a statistical framework for learning in a class of networks of spiking neurons. Our aim is to show how optimal local learning rules can be readily derived once the neural dynamics and desired functionality of the neural assembly have been specified, in contrast to other models which assume (sub-optimal) learning rules. Within this framework we derive local rules for learning temporal sequences in a model of spiking neurons and demonstrate their superior performance over correlation-based (Hebbian) approaches. We further show how to include mechanisms such as synaptic depression, and outline how the framework readily extends to learning in networks of highly complex spiking neurons. A stochastic quantal vesicle-release mechanism is considered, and its implications for the complexity of learning are discussed.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Middle East > Jordan (0.04)
Neuromorphic Bistable VLSI Synapses with Spike-Timing-Dependent Plasticity
In these types of synapses, the short-term dynamics of the synaptic efficacies are governed by the relative timing of the pre- and post-synaptic spikes, while on long time scales the efficacies tend asymptotically to either a potentiated state or to a depressed one. We fabricated a prototype VLSI chip containing a network of integrate-and-fire neurons interconnected via bistable STDP synapses. Test results from this chip demonstrate the synapse's STDP learning properties and its long-term bistable characteristics.
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Asia > Middle East > Jordan (0.04)
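The two time scales described in the abstract can be sketched in software: a pair-based STDP update acts on short time scales, while a slow drift pushes the efficacy toward a depressed (0) or potentiated (1) state depending on which side of a bistability threshold it sits. Parameters are illustrative assumptions, not the chip's circuit values.

```python
import numpy as np

A_PLUS, A_MINUS, TAU_STDP = 0.05, 0.05, 20.0  # STDP amplitudes, time constant (ms)
DRIFT, W_THRESH = 0.01, 0.5                   # slow drift rate, bistability threshold

def stdp_update(w, dt):
    """Pair-based STDP: potentiate if pre precedes post
    (dt = t_post - t_pre > 0), depress otherwise."""
    if dt > 0:
        w += A_PLUS * np.exp(-dt / TAU_STDP)
    else:
        w -= A_MINUS * np.exp(dt / TAU_STDP)
    return float(np.clip(w, 0.0, 1.0))

def bistable_drift(w, n_steps=200):
    """Slow relaxation toward one of the two stable efficacy states."""
    for _ in range(n_steps):
        w += DRIFT if w > W_THRESH else -DRIFT
        w = float(np.clip(w, 0.0, 1.0))
    return w
```

On the chip this drift is implemented in analog circuitry, which is what gives the synapse its long-term storage without a dedicated memory element.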